Figure: Marginal posteriors obtained from 10,000 iterations of the MH and AM algorithms.

We seek the posterior parameter distribution $p(\theta|D)$ for the parameter vector $\theta = [X_0,r,\alpha,\sigma]$, implementing primarily the Metropolis-Hastings (MH) algorithm, and subsequently Adaptive Metropolis (AM). Uniform priors are chosen for each of the target parameters: $$ p(X_0) = U(0,\infty), \quad p(r)=U(0,1), \quad p(\alpha)=U(0,\infty),\quad p(\sigma) = U(0,\infty). $$
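The sampling scheme above can be sketched as a random-walk MH loop with the stated uniform priors enforced through the log-prior. This is a minimal illustration, not the project's implementation: the Gaussian likelihood around a geometric-growth mean (`model_mean`) is a stand-in assumption for the actual forward model, and the proposal scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(theta):
    """Uniform priors: constant log-density inside the support, -inf outside.
    X0, alpha, sigma > 0; r in (0, 1)."""
    X0, r, alpha, sigma = theta
    if X0 <= 0 or not (0.0 < r < 1.0) or alpha <= 0 or sigma <= 0:
        return -np.inf
    return 0.0  # flat prior contributes only a constant

def log_likelihood(theta, data):
    """Gaussian observation likelihood around a hypothetical model mean.
    The geometric-growth mean below is an assumption standing in for the
    forward model described in the Model section."""
    X0, r, alpha, sigma = theta
    n = np.arange(len(data))
    model_mean = X0 * (1.0 + r) ** n  # assumed model; replace with the real one
    return np.sum(-0.5 * ((data - model_mean) / sigma) ** 2 - np.log(sigma))

def metropolis_hastings(data, theta0, prop_sd, n_iter=10_000):
    """Random-walk MH with a fixed, diagonal Gaussian proposal."""
    theta = np.asarray(theta0, dtype=float)
    log_post = log_prior(theta) + log_likelihood(theta, data)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, prop_sd, size=theta.size)
        lp = log_prior(prop)
        if lp > -np.inf:  # only evaluate the likelihood inside the support
            lp += log_likelihood(prop, data)
        if np.log(rng.uniform()) < lp - log_post:  # accept/reject step
            theta, log_post = prop, lp
        chain[i] = theta
    return chain
```

Proposals falling outside the prior support receive log-density $-\infty$ and are always rejected, which is how the uniform priors constrain the chain.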


Marginal Posteriors: Simulated Data

To test the MH and AM algorithms, parameter inference was performed on data simulated from known values of $X_0$, $r$, $\alpha$ and $\sigma$. The simulated data correspond to the forward simulation of the process discussed in Model, with $X_0=50$ and $r=0.6$. Additionally, zero-mean Gaussian noise with $\sigma=0$ is added.
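A forward simulation of this kind can be sketched as below. The geometric amplification curve $X_n = X_0(1+r)^n$ is an assumed stand-in for the process in the Model section, and `simulate_qpcr` is a hypothetical helper, not the project's code; with $\sigma = 0$ the noise term vanishes and the deterministic trajectory is recovered.

```python
import numpy as np

def simulate_qpcr(X0, r, sigma, n_cycles, seed=0):
    """Hypothetical forward simulation: deterministic geometric growth
    X_n = X0 * (1 + r)**n (an assumption standing in for the model in
    the Model section) plus zero-mean Gaussian observation noise."""
    rng = np.random.default_rng(seed)
    n = np.arange(n_cycles)
    signal = X0 * (1.0 + r) ** n
    return signal + rng.normal(0.0, sigma, size=n_cycles)
```

For example, `simulate_qpcr(50, 0.6, 0.0, 20)` produces the noiseless trajectory used as the known ground truth for inference.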

Marginal posteriors were obtained for $X_0$, $r$ and $\sigma$. We omit $\alpha$ from the discussion henceforth as, in general, both algorithms exhibited poor inference for this parameter. Indeed, Lalam (2007) treats $\alpha$ as a known constant, and no method is specified there for determining its value. Although methods analogous to those used for the other model parameters were attempted in this project, the results were uninformative. Notably, variability in $\alpha$ can cause significant uncertainty in estimates of $X_0$, owing to the co-dependency between the two parameters.

Comparison of the marginal posteriors shows that the MH algorithm infers parameters closer to the known values used to simulate the data. Although AM performs well in high-dimensional systems, it has been found to require many iterations to 'learn' the true target covariance matrix for the transition kernel (Roberts and Rosenthal, 2009). The AM algorithm may therefore perform better given an increased number of iterations and more execution time; however, owing to time constraints and the 'trial and error' these methods require in general (see Discussion), MH is the primary method implemented henceforth.
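The covariance 'learning' that makes AM slow to warm up can be illustrated with the standard Haario-style update, in which the proposal covariance is rebuilt from the chain history. This is a sketch of the textbook recipe, not necessarily the exact adaptation rule used in this project.

```python
import numpy as np

def adaptive_cov(history, eps=1e-6):
    """Haario-style adaptive proposal covariance built from the chain
    history: C = s_d * Cov(history) + s_d * eps * I, with the standard
    scaling s_d = 2.38**2 / d for a d-dimensional target. Early in the
    run, Cov(history) is a poor estimate of the target covariance, which
    is why AM needs many iterations before the proposal is well tuned."""
    d = history.shape[1]
    s_d = 2.38 ** 2 / d
    return s_d * np.cov(history, rowvar=False) + s_d * eps * np.eye(d)
```

The small `eps` jitter keeps the proposal covariance positive definite even when the early chain history is nearly degenerate.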

Figure: Marginal posteriors obtained from 10,000 iterations of MH on experimental data from single-cell qPCR experiment 10-3-17 well 253.

Marginal Posteriors: Experimental Data

The MH algorithm is implemented on data from single-cell qPCR experiment 10-3-17 well 253. Marginal posteriors obtained from 10,000 iterations are shown in Figure (INSERT). The posteriors shown here are merely indicative: true interpretation requires analysis of chain mixing (inspection of the time series and autocorrelation functions), which upon further inspection reveals that the Markov chain is 'cold' (see Discussion).
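The autocorrelation diagnostic mentioned above can be computed directly from a one-dimensional chain; slowly decaying autocorrelation is the signature of the poor mixing described. This is a generic sketch of the sample autocorrelation function, not the project's diagnostic code.

```python
import numpy as np

def autocorr(x, max_lag=50):
    """Sample autocorrelation of a 1-D MCMC chain up to max_lag.
    Values near zero at moderate lags indicate good mixing; slow decay
    indicates a poorly mixing ('cold') chain."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])
```

Plotting `autocorr(chain[:, 0])` alongside the trace of $X_0$ is the standard way to judge whether the posterior samples can be interpreted.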

To investigate the effects of increasing execution time for the algorithm, as well as considering the AM method, results for $X_0$ (the target parameter of highest importance) from MH 10,000 iterations, MH 100,000 iterations, and AM 100,000 iterations are shown for comparison in Figures (INSERT).

Figure (INSERT) indicates that executing the MH algorithm for longer does not in fact improve the marginal posterior distribution obtained. It may be that the Markov chain is too cold, so that running the algorithm for an extended period simply allows the chain to veer away from the true posterior distribution. One possible remedy would be to impose a stricter prior on $X_0$, for example $p(X_0)=U(0,100)$ as opposed to $U(0,\infty)$.

Figure (INSERT) shows that, as discussed previously, implementing the AM algorithm for an increased number of iterations appears to capture the posterior more effectively. However, this posterior differs substantially from that obtained from MH with 10,000 iterations. Moreover, inspection of the time series and autocorrelation functions is required to ascertain whether the posterior can be interpreted appropriately.

Primary adjustments that can be made to methods include increasing execution time (extending the number of iterations), and experimenting with priors. For further details of potential adjustments to implementation of methods see Discussion.
